Current Issue: January - March, Volume: 2016, Issue Number: 1, Articles: 4
Surveillance systems capable of autonomously monitoring vast areas are an emerging trend, particularly when wide-angle cameras are combined with pan-tilt-zoom (PTZ) cameras in a master-slave configuration. The use of fish-eye lenses allows the master camera to maximize the coverage area while the PTZ acts as a foveal sensor, providing high-resolution images of regions of interest. Despite the advantages of this architecture, the mapping between image coordinates and pan-tilt values is the major bottleneck in such systems, since it depends on depth information and fish-eye effect correction. In this paper, we address these problems by exploiting geometric cues to perform height estimation. This information is used both for inferring 3D information from a single static camera deployed at an arbitrary position and for determining lens parameters to remove fish-eye distortion. Compared with previous approaches, our method has the following advantages: (1) fish-eye distortion is corrected without relying on calibration patterns; (2) 3D information is inferred from a single static camera placed at an arbitrary location in the scene....
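The abstract does not detail the paper's correction method, but the kind of fish-eye-to-rectilinear remapping it refers to can be sketched under a common assumption, the equidistant projection model (r_d = f·θ). The distortion centre (cx, cy) and focal length f below are illustrative parameters, not values from the paper:

```python
import math

def undistort_equidistant(u, v, cx, cy, f):
    """Map a pixel (u, v) in a fish-eye image to its rectilinear
    (pinhole) position, assuming the equidistant model r_d = f * theta.
    (cx, cy) is the distortion centre, f the focal length in pixels."""
    dx, dy = u - cx, v - cy
    r_d = math.hypot(dx, dy)           # distorted radius from centre
    if r_d == 0:
        return (u, v)                  # the centre pixel is unchanged
    theta = r_d / f                    # incidence angle of the ray
    r_u = f * math.tan(theta)          # corresponding rectilinear radius
    scale = r_u / r_d
    return (cx + dx * scale, cy + dy * scale)
```

Pixels far from the centre are pushed outward, undoing the characteristic barrel compression of fish-eye lenses; near the centre the mapping is close to the identity.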
The aim of this work was the implementation of an identification and evaluation methodology for the detection of external potato damage. A system for automatic image recognition was used, and the methodology was validated by comparison with human visual selection. The potato surface image was acquired with a monochromatic video camera operating in the visible spectrum and in the near infrared. This device was connected to a frame grabber card, interfaced to a PC, for image acquisition and processing. The image-processing software enabled the definition of algorithms for the automatic recognition and measurement of the damaged area. The obtained data were compared with the human visual evaluation, demonstrating an adequate level of reliability of the applied methodology....
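The paper's algorithms are not reproduced in the abstract, but the core measurement it describes, segmenting the damaged area from a monochrome image, can be illustrated with a simple intensity threshold. The assumption that damaged tissue appears darker than healthy tissue, and the threshold value itself, are illustrative choices, not taken from the paper:

```python
import numpy as np

def damaged_fraction(gray, threshold=80):
    """Estimate the damaged fraction of a potato surface image.
    gray is a 2D uint8 array; pixels darker than `threshold` are
    counted as damaged (an illustrative assumption)."""
    damaged = gray < threshold         # boolean mask of dark pixels
    return damaged.mean()              # fraction of damaged pixels

# Example: a synthetic 4x4 image whose top-left quarter is "damaged".
img = np.full((4, 4), 200, dtype=np.uint8)
img[:2, :2] = 50
print(damaged_fraction(img))           # -> 0.25
```

A real system would add lighting normalisation and morphological cleanup, but the damaged-area measurement reduces to counting mask pixels as above.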
3D vision is an area of computer vision that has attracted a lot of research interest and has been widely studied. In recent years we have witnessed increasing interest from the industrial community. This interest is driven by the recent advances in 3D technologies, which enable high-precision measurements at an affordable cost. With 3D vision techniques we can conduct advanced manufactured-parts inspections and metrology analysis. However, we are not able to detect subsurface defects. This kind of detection is achieved by other techniques, like infrared thermography. In this work, we present a new registration framework for 3D and thermal infrared multimodal fusion. The resulting fused data can be used for advanced 3D inspection in Nondestructive Testing and Evaluation (NDT&E) applications. The fusion permits visible-surface and subsurface inspections to be conducted simultaneously in the same process. Experimental tests were conducted with different materials. The obtained results are promising and show how these new techniques can be used efficiently in a combined NDT&E-Metrology analysis of manufactured parts, in areas such as aerospace and automotive....
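The registration framework itself is not specified in the abstract, but the fusion step it enables can be sketched: once a rigid transform (R, t) registering the 3D scanner frame to the thermal camera frame is known, each surface point can be projected into the thermal image to pick up a temperature sample. The matrices below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def project_to_thermal(points_3d, R, t, K):
    """Project Nx3 surface points into the thermal image.
    R (3x3) and t (3,) map the scanner frame to the thermal camera
    frame; K is the thermal camera intrinsic matrix (pinhole model).
    Returns Nx2 pixel coordinates (u, v)."""
    cam = (R @ points_3d.T).T + t      # scanner frame -> camera frame
    pix = (K @ cam.T).T                # homogeneous pinhole projection
    return pix[:, :2] / pix[:, 2:3]    # dehomogenise to (u, v)

# Illustrative intrinsics and an identity registration.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 2.0]])
print(project_to_thermal(pts, np.eye(3), np.zeros(3), K))
```

Sampling the thermal image at the returned pixels attaches a temperature value to every 3D point, which is the fused representation the abstract describes.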
This paper describes an industrial sorting system based on robot vision technology, introduces the main image-processing methods used during development, and simulates the algorithms in Matlab. In addition, we build an image-processing algorithm library in C# and realize recognition and location of regular-geometry workpieces. Furthermore, we analyze the camera model in the vision algorithm library, calibrate the camera, process the image series, and solve the identification problem for regular-geometry workpieces of different colours....
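The location step in such a sorting system typically reduces to mapping an image pixel onto the sorting plane with a homography obtained from camera calibration. The abstract does not give the paper's calibration values, so the matrix below is purely illustrative (0.5 mm per pixel plus an offset):

```python
import numpy as np

def pixel_to_table(H, u, v):
    """Locate a workpiece on the sorting table from its image pixel.
    H is a 3x3 homography mapping image pixels to table-plane (x, y)
    coordinates, obtained beforehand from camera calibration."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]                # dehomogenise

# Illustrative homography: pure scaling of 0.5 mm/pixel with an offset.
H = np.array([[0.5, 0.0, 10.0],
              [0.0, 0.5, 20.0],
              [0.0, 0.0, 1.0]])
print(pixel_to_table(H, 100, 40))      # -> [60. 40.]
```

The resulting table coordinates are what the robot controller consumes to pick up the recognised workpiece.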